As artificial intelligence tools like ChatGPT and Midjourney began to enter public consciousness, so too did a worrying trend of content called "AI slop," or sometimes just "slop."
Slop is, in a way, the evolution of spam. It's low-quality content that's easy to create thanks to artificial intelligence (AI) tools. It can overwhelm social media feeds, leaving users unsure of what's real and what's not. It comes in many forms — posts on social media, of course, but also books on Amazon, music on Spotify, articles from less-than-reliable news outlets (and, unfortunately, some reliable outlets) and even, occasionally, papers in peer-reviewed scientific journals.
For instance, we have repeatedly checked claims about celebrities supposedly doing good deeds that originated with YouTube videos or Facebook pages that post slop. Former NFL quarterback Peyton Manning was a frequent focus of such stories in June and July 2025.
Animals also frequently appear in slop content. For example, we've reviewed viral videos of rabbits and raccoons jumping on trampolines that were (sadly) fake.
(No, these bouncing bunnies aren't real — TikTok user @rachelthecatlovers)
Finally, of course, Snopes has checked a litany of claims about politicians. While real photos exist of U.S. President Donald Trump and deceased sex offender Jeffrey Epstein, some are AI-generated. We've also disproved an AI-generated speech attributed to Trump and confirmed several instances in which his administration posted AI-generated content, some of which could be considered slop.
The growth of AI slop feels like an inevitable side effect of "enshittification," a word coined by writer Cory Doctorow in 2022 to describe how online platforms like Amazon and Facebook have worsened over time. "First, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die," he wrote.
Slop is not good for users or businesses
AI slop isn't designed for humans, according to an article from 404 Media. Instead, it directly targets the algorithms that decide what content to show users. That article compared AI slop's underlying strategy to a "brute force attack," the simplest of hacking strategies: trying every possible combination, one at a time, until something gives. What a brute force attack lacks in efficiency, it makes up for in efficacy. Slop games the system by flooding algorithms with AI-generated content until something goes viral.
(Manning's supposed saintliness shown above — Facebook page Magic Clement / Snopes Illustration)
That's right — slop is a business.
A 2024 article in New York Magazine documented several individuals who promoted AI-generated content as a side hustle. It said "sloppers" can sell books on Amazon and use music on Spotify to receive royalties, while news articles can be hosted on cheap WordPress blogs filled with advertising links. The money can even come from social media platforms themselves — most have programs that offer creators money based on a post's engagement. Or, as the New York Magazine article described it, "a slop subsidy."
According to a separate 404 Media article, the payments generally aren't substantial, "hundreds of dollars" at most. However, that money can go a lot further in countries like India, Vietnam or the Philippines. Claims that Snopes has fact-checked and found to have originated as AI slop often have a tie to such countries — AI slop "news stories" often link to websites based in Vietnam, for instance.
(This image accompanied a false story about a drifter named Ronald McDonald murdering children across the U.S. Midwest in 1892, which supposedly inspired the McDonald's fast-food chain mascot — @buried__truths/TikTok)
The end goal for a company like Meta, according to the first 404 Media article, is to "move toward a world where a never-ending feed of hyper niche content can be delivered directly to the people who are into that type of content." That requires a massive amount of content and data collection.
AI slop's real-world impact
As AI slop increases, platforms have placed the onus of figuring out whether something is real on the user. Snopes has a page containing a few tips and tricks for identifying AI-generated images, but because those tools adapt so quickly, results may vary. The decision to delegate that job to users in the first place can have genuine negative consequences.
Introducing uncertainty in the form of fake AI slop can cause people to discredit legitimate information or, worse, tune out entirely. As one Forbes writer described it, "When people feel they can no longer trust what they see, they may stop trying altogether. It's easier to not care than to expend the mental energy required to verify every image or story." An op-ed in The Guardian called the effect "profound disorientation."
(One story falsely claimed former NFL star Tom Brady, pictured above in an AI image, donated millions of dollars to victims of the July 2025 Texas floods — Gridiron Master / Facebook)
AI slop has also had an impact in at least one natural disaster. In the aftermath of Hurricane Helene in 2024, Republicans used AI-generated images to criticize then-President Joe Biden's response to the disaster. Told that one viral image was created by AI tools, a Republican National Committee member responded that "it doesn't matter" where the photo came from.
(This image shared on X didn't actually show a real girl crying and holding a puppy on a boat in the aftermath of Hurricane Helene.)
In a parody of Trump's unique style of posting, the X account of California Democratic Gov. Gavin Newsom's press office has also posted content that could be described as slop.
